Journal of Clinical Epidemiology
Top medRxiv preprints most likely to be published in this journal, ranked by match strength.
Background: The number of problematic randomized clinical trials (RCTs) has risen sharply in recent decades, posing serious challenges to the integrity of the healthcare evidence ecosystem. Objective: To investigate whether retraction of problematic RCTs could reduce evidence contamination. Design: Retrospective cohort study. Setting: A secondary analysis of the VITALITY Study database. Participants: 1,330 retracted RCTs with 847 systematic reviews. Measurements: The difference in the median number (and...
Background: Journals may respond to integrity concerns by publishing an editorial response (editorial notice, expression of concern (EoC) or retraction). We investigated whether the type of editorial response affected citation rates. Methods: We obtained citations for 172 randomised controlled trials (RCTs) with integrity concerns (41 had editorial notices, 38 EoCs and 23 retractions) and control RCTs from the same journal and year. Monthly citation rates up to 60 months before and after editorial ...
Background: Individual participant data (IPD) meta-analyses obtain, harmonise and synthesise the raw individual-level data from multiple studies, and are increasingly important in an era of data sharing and personalised medicine to inform clinical practice and policy. Objectives: (1) Describe the landscape of IPD meta-analysis of randomised trials over time; (2) establish current practice in design, conduct, analysis and reporting for pairwise IPD meta-analysis; and (3) derive recommendations to i...
Background: The ability of large language models (LLMs) to work collaboratively and screen studies in a systematic review (SR) is under-explored. Hence, we aimed to evaluate the effectiveness of LLMs in automating the screening process in systematic reviews. Methods: This is an observational study which included labeled data (titles and abstracts) for five SRs. Originally, two reviewers screened the citations independently for eligibility. A third reviewer cross-checked each citation for quality ...
Background: The exponential growth of Mendelian randomization (MR) literature has created challenges for systematically organising and synthesising evidence, with key information fragmented across heterogeneous publications. We present MR-KG, a knowledge graph resource that uses large language models (LLMs) to systematically extract and structure published MR evidence at scale. Methods: We evaluated eight OpenAI and local LLMs for extracting structured information from MR study abstracts. Two reviewers...
Objective: Antimicrobial resistance (AMR) is an urgent global health threat, resulting in more than 5 million deaths in 2019. Timely and complete reporting of antimicrobial agent (AMA) clinical trial results is essential to evaluate the safety and efficacy of investigational therapies. The Food and Drug Administration Amendments Act (FDAAA) of 2007 mandated results reporting for applicable clinical trials to ClinicalTrials.gov. After nearly ten years of underreporting, the HHS issued the Fin...
Background: The strength and transparency of clinical trial evidence supporting drug approvals have come under increasing scrutiny, particularly given the growing use of regulatory flexibility and expedited pathways. While U.S. Food and Drug Administration (FDA) standards have been extensively analyzed, evidence standards at the European Medicines Agency (EMA) remain less well characterized. Thus, this study aims to systematically assess the design, quality, and outcomes of pivotal efficac...
The terms personalized, individualized and precision medicine are increasingly used to describe health interventions, yet their operational meaning in clinical research remains unclear. Despite extensive conceptual discussion, there is limited empirical evidence on how these labels are applied in randomized controlled trials (RCTs) and whether such trials meet standards of transparency and methodological rigor. We systematically examined 262 RCTs published between 2020 and 2022 that used the ter...
Background: In pharmacoepidemiological studies, the days of treatment (DoT) durations associated with individual electronic drug utilization records (DUR) are usually missing. Researcher-defined duration (RDD) calculation approaches, as opposed to data-driven approaches, can be used to estimate DoT based on the specific choices and assumptions made by investigators. These are usually underreported or even undocumented. We aimed to develop a framework for the standardization of terminology, formulas, im...
Systematic reviews are used in academia, biotechnology, pharmaceutical companies and government to synthesise and appraise large numbers of publications. The current (largely manual) workflow takes an average of 9-18 months [1], at a cost of $100,000+ per review [2]. We built a platform, ScholaraAI, that leverages artificial intelligence to cut this to < 0.1% of the time, without compromising quality. ScholaraAI facilitates end-to-end systematic reviews: search, screening, data extraction, and analysi...
Background: Routinely collected health data are increasingly used to generate real-world evidence for therapeutic decision-making. Yet stakeholders, including clinicians, pharmaceutical industry representatives, patient advocacy groups, and statisticians, prioritize different aspects of data quality, analysis, and interpretation. Without explicit consideration of these perspectives, analyses risk being fragmented, misaligned with end-user needs, or lacking transparency. Methods: We developed a sta...
Background: Systematic reviews (SRs) are essential for evidence-based medicine but require extensive time and resources for abstract screening. Large language models (LLMs) offer potential for automating this process, yet concerns about data privacy, intellectual property protection, and reproducibility limit the use of cloud-based solutions in research settings. Objective: To evaluate the performance of a locally deployed 20-billion-parameter LLM for automated abstract screening in systematic revi...
Objective: Our goal is to unify the 72 biomedical publication types and study designs (collectively, PTs) into a single rubric and hierarchy. Materials and Methods: This is carried out in a data-driven manner by computing pairwise similarities of each PT against all others to form a similarity matrix. By performing hierarchical clustering we place each PT in a specific category and collect these into broader categories. Results: Spearman correlations among PT pairs ranged from strongly negative to s...
The growth of generative AI and easily available Open Access health datasets has transformed researcher productivity, leading to an explosion in publications that has in part been attributed to paper mills (organisations that provide manuscripts for payment) and other unethical actors. These entities are not, however, homogeneous, and have a range of products and target markets. While the demand from China has received much attention, here we provide a case study of CDC WONDER, a dataset that has...
Introduction: Randomised controlled trials (RCTs) investigate the safety and efficacy of interventions. It has become clear, however, that some RCTs include fabricated data. The INSPECT-SR tool assesses the trustworthiness of RCTs in systematic reviews of healthcare-related interventions. However, where individual participant data (IPD) can be obtained, a more thorough assessment of trustworthiness is possible. Consequently, INSPECT-SR recommends obtaining IPD to resolve uncertainties, though there ...
Importance: Large language models (LLMs) offer potential decision support, but their accuracy varies. Prompt engineering can generally enhance LLM behavior in a clinical context, yet best practices have yet to be formally explored in realistic neurology settings. Objective: To evaluate the impact of structured prompting versus simple prompting on the performance of six LLMs (three closed-source: OpenAI GPT-4o, OpenAI o3, OpenAI GPT-5.2 Thinking; three open-source: Meta Llama-4-Scout-17B-16E-Instruc...
Objective: To address the inefficiency, subjectivity, and high expertise barrier of traditional epidemiological causal inference, this study designed, developed, and validated an AI-powered agent (EpiCausalX Agent) to automate the end-to-end workflow. It integrates cross-database literature retrieval, intelligent causal reasoning, and Directed Acyclic Graph (DAG) visualization to provide a reliable, accessible tool for researchers. Materials and Methods: Built on the LangChain 1.0 framework with a ...
Objectives: To quantify the amount and certainty of evidence in Cochrane systematic reviews of interventions, and to describe how this evidence has evolved over time. Design: Large-scale meta-research study. Data source: Cochrane Database of Systematic Reviews (search date April 8, 2025). Eligibility criteria: Cochrane systematic reviews assessing interventions and reporting "Summary of findings" tables. Data extraction: Data were automatically extracted using web scraping and a large language model, with q...
Large language models (LLMs) are increasingly transforming scientific workflows, yet their application to rigorous evidence synthesis remains underexplored. We present a fully automated pipeline, executed via a single Python script, that leverages the Claude API to generate systematic reviews from literature search through manuscript completion without human intervention. Our pipeline processes hundreds of papers through iterative API calls for inclusion evaluation, information extraction...
Background: Automated systems, including large language models, are increasingly used to support data extraction in diagnostic systematic reviews. However, their reliability, safety, and repeatability under realistic extraction conditions remain insufficiently characterized. Objective: To benchmark the end-to-end reliability of automated systems for extracting diagnostic accuracy data from published uro-oncologic studies, with a focus on correctness, abstention behavior in non-derivable scenarios, ...